225 research outputs found

    A PAM method for computing Wasserstein barycenter with unknown supports in D2-clustering

    A Wasserstein barycenter is the centroid of a collection of discrete probability distributions that minimizes the average $\ell_2$-Wasserstein distance to the collection. This paper is concerned with the computation of a Wasserstein barycenter in the case where the support points are not pre-specified, which is known to be a severe bottleneck in D2-clustering due to its large scale and nonconvexity. We develop a proximal alternating minimization (PAM) method for computing an approximate Wasserstein barycenter, and provide a global convergence analysis. The method achieves good accuracy at a reduced computational cost when the unknown support of the barycenter has low cardinality. Numerical comparisons with an existing representative method on synthetic and real data show that our method yields slightly better objective values in much less computing time, and that the computed approximate barycenter performs better in D2-clustering.
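    The alternating structure of such a scheme can be sketched in a few lines. Below is a minimal sketch, assuming Euclidean support points and replacing the exact optimal-transport subproblem with a hard nearest-support assignment (a Lloyd-style approximation, not the authors' PAM); the proximal weight `rho` and all other names are illustrative choices.

```python
import numpy as np

def approx_barycenter(distributions, m, rho=0.1, iters=80, seed=0):
    """Crude free-support barycenter sketch.

    distributions: list of (Y, w) pairs, Y an (n_i, d) array of support
    points and w an (n_i,) array of nonnegative weights.
    Alternates between (a) transporting each atom to its nearest
    barycenter support (hard-assignment surrogate for the OT subproblem)
    and (b) a proximal weighted-mean update of the m support points.
    """
    rng = np.random.default_rng(seed)
    pts = np.vstack([Y for Y, _ in distributions])
    X = pts[rng.choice(len(pts), size=m, replace=False)].astype(float)
    for _ in range(iters):
        num = np.zeros_like(X)
        den = np.zeros(m)
        for Y, w in distributions:
            d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
            a = d2.argmin(axis=1)               # nearest-support assignment
            np.add.at(num, a, w[:, None] * Y)   # accumulate transported mass
            np.add.at(den, a, w)
        X = (num + rho * X) / (den + rho)[:, None]  # proximal support update
    return X
```

    The proximal term keeps supports that receive no mass unchanged and damps the update of the others, mirroring the role of the proximal terms in a PAM scheme.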

    Inexact indefinite proximal ADMMs for 2-block separable convex programs and applications to 4-block DNNSDPs

    This paper is concerned with two-block separable convex minimization problems with linear constraints, for which it is either impossible or too expensive to obtain exact solutions of the subproblems involved in the proximal ADMM (alternating direction method of multipliers). Such structured convex minimization problems often arise from the two-block regrouping of three- or four-block separable convex optimization problems with linear constraints, or from constrained total-variation super-resolution image reconstruction problems in image processing. For them, we propose an inexact indefinite proximal ADMM of step-size $\tau\in(0,\frac{\sqrt{5}+1}{2})$ with two easily implementable inexactness criteria to control the solution accuracy of the subproblems, and establish convergence under a mild assumption on the indefinite proximal terms. We apply the proposed inexact indefinite proximal ADMMs to the three- or four-block separable convex minimization problems with linear constraints that arise from the duality of the important class of doubly nonnegative semidefinite programming (DNNSDP) problems with many linear equality and/or inequality constraints. Numerical results indicate that the inexact indefinite proximal ADMM with the absolute error criterion performs comparably to the directly extended multi-block ADMM of step-size $\tau=1.618$ (which has no convergence guarantee), in terms of both the number of iterations and the computation time.
    Comment: 34 pages, 3 figures
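    As a toy illustration of the step-size range, the following sketch runs a scaled two-block ADMM with exact subproblems and dual step-size $\tau=1.6\in(0,\frac{\sqrt{5}+1}{2})$ on $\min \frac{1}{2}\|x-a\|^2+\|z\|_1$ s.t. $x=z$; the model problem and parameters are illustrative, not the paper's DNNSDP experiments.

```python
import numpy as np

def two_block_admm(a, sigma=1.0, tau=1.6, iters=1000):
    # min 0.5*||x - a||^2 + ||z||_1  s.t.  x - z = 0
    # Scaled two-block ADMM; tau in (0, (sqrt(5)+1)/2) is the dual step-size.
    a = np.asarray(a, dtype=float)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                         # scaled multiplier
    for _ in range(iters):
        x = (a + sigma * (z - u)) / (1.0 + sigma)    # quadratic block (exact)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / sigma, 0.0)  # soft-threshold
        u = u + tau * (x - z)                        # tau-step dual update
    return x, z
```

    At the fixed point $x=z$ is the soft-thresholding of $a$ at level $1$, the known solution of this toy problem.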

    Locally upper Lipschitz of the perturbed KKT system of Ky Fan $k$-norm matrix conic optimization problems

    This note is concerned with nonlinear Ky Fan $k$-norm matrix conic optimization problems, which include the nuclear norm regularized minimization problem as a special case. For this class of nonpolyhedral matrix conic optimization problems, under the assumption that a stationary solution satisfies the second-order sufficient condition and the associated Lagrange multiplier satisfies the strict Robinson constraint qualification, we show that two classes of perturbed KKT systems are locally upper Lipschitz at the origin, which implies a local error bound for the distance from any point in a neighborhood of the corresponding KKT point to the whole set of KKT points.
    Comment: 27 pages
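    For reference, a standard formulation of the property (a textbook definition, not quoted from the paper): a multifunction $S:\mathbb{Y}\rightrightarrows\mathbb{X}$ is locally upper Lipschitz at the origin for $\bar z\in S(0)$ with modulus $\kappa$ if

```latex
S(p)\cap\mathcal{U} \,\subseteq\, S(0) + \kappa\,\|p\|\,\mathbb{B}
\qquad \text{for all } p \text{ near } 0,
```

    where $\mathcal{U}$ is a neighborhood of $\bar z$ and $\mathbb{B}$ is the closed unit ball. Applied to a perturbed KKT system, this yields a local error bound of the form $\mathrm{dist}(z,S_{\rm KKT})\le\kappa\,\|R(z)\|$ for a suitable residual map $R$ of the system.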

    Linear convergence of the generalized PPA and several splitting methods for the composite inclusion problem

    For the inclusion problem involving two maximal monotone operators, under the metric subregularity of the composite operator, we derive the linear convergence of the generalized proximal point algorithm and several splitting algorithms, including the over-relaxed forward-backward splitting algorithm, the generalized Douglas-Rachford splitting algorithm and Davis' three-operator splitting algorithm. To the best of our knowledge, this linear convergence condition is weaker than the existing ones, almost all of which require the strong monotonicity of the composite operator. In addition, we give some sufficient conditions ensuring the metric subregularity of the composite operator. Finally, preliminary numerical results on some toy examples support the theoretical findings.
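    As a toy check of the splitting framework, here is a relaxed Douglas-Rachford iteration for $0\in A(x)+B(x)$ with $A=\partial\|\cdot\|_1$ and $B(x)=x-a$ (so the composite operator is strongly monotone and a linear rate is expected); the operators and parameters are illustrative, not from the paper.

```python
import numpy as np

def soft(v, t):
    # resolvent J_{tA} of A = subdifferential of the l1 norm (soft-threshold)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(a, gamma=1.0, lam=1.0, iters=200):
    # Solves 0 in A(x) + B(x) with A = d||.||_1 and B(x) = x - a.
    # The resolvent of gamma*B is v -> (v + gamma*a) / (1 + gamma).
    a = np.asarray(a, dtype=float)
    y = np.zeros_like(a)
    for _ in range(iters):
        x = soft(y, gamma)                              # x = J_{gamma A}(y)
        z = (2.0 * x - y + gamma * a) / (1.0 + gamma)   # z = J_{gamma B}(2x - y)
        y = y + lam * (z - x)                           # relaxed update
    return soft(y, gamma)
```

    At a fixed point $y$ one checks $y-x\in\gamma A(x)$ and $x-y\in\gamma B(x)$, so $x=J_{\gamma A}(y)$ solves the inclusion.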

    Error bounds for rank constrained optimization problems and applications

    This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set $\mathcal{R}=\{X\in\mathbb{X}\mid {\rm rank}(X)\le \kappa\}$ and a closed convex set $\Omega$. We establish local (global) Lipschitzian type error bounds for estimating the distance from any $X\in\Omega$ ($X\in\mathbb{X}$) to the feasible set and the solution set, respectively, under the calmness of a multifunction associated with the feasible set at the origin, a condition satisfied in particular by three classes of common rank constrained optimization problems. As an application of the local Lipschitzian type error bounds, we show that the penalty problem obtained by moving the rank constraint into the objective is exact, in the sense that its global optimal solution set coincides with that of the original problem once the penalty parameter exceeds a certain threshold. This in particular offers an affirmative answer to the open question of whether the penalty problem (32) in (Gao and Sun, 2010) is exact or not. As another application, we derive error bounds for the iterates generated by a multi-stage convex relaxation approach to those three classes of rank constrained problems, and show that the bounds are nonincreasing as the number of stages increases.
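    The distance to the rank constraint set $\mathcal{R}$ appearing in such error bounds is computable in closed form via the Eckart-Young theorem; a minimal sketch (the names follow the abstract's notation):

```python
import numpy as np

def proj_rank(X, kappa):
    # Nearest (Frobenius-norm) matrix of rank <= kappa: truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[kappa:] = 0.0
    return (U * s) @ Vt

def dist_rank(X, kappa):
    # dist(X, R) = Frobenius norm of the tail singular values of X.
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.linalg.norm(s[kappa:]))
```

    This is also the building block of penalty or relaxation schemes that trade the hard rank constraint for a distance term in the objective.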

    A corrected semi-proximal ADMM for multi-block convex optimization and its application to DNN-SDPs

    In this paper we propose a corrected semi-proximal ADMM (alternating direction method of multipliers) for general $p$-block ($p\ge 3$) convex optimization problems with linear constraints, aiming to resolve the dilemma that almost all the existing modified versions of the directly extended ADMM, although equipped with convergence guarantees, often perform substantially worse than the directly extended ADMM itself, which has no convergence guarantee. Specifically, in each iteration we use the multi-block semi-proximal ADMM with step-size at least $1$ as a prediction step to generate a good prediction point, and then make as small a correction as possible to the middle $(p-2)$ blocks of the prediction point. Among others, the step-size of the multi-block semi-proximal ADMM is adaptively determined by the ratio of the infeasibility produced by the current semi-proximal ADMM step to that yielded by the last correction step. For the proposed corrected semi-proximal ADMM, we establish global convergence results under a mild assumption, and apply it to the important class of doubly nonnegative semidefinite programming (DNN-SDP) problems with many linear equality and/or inequality constraints. Our extensive numerical tests show that the corrected semi-proximal ADMM is superior to the directly extended ADMM with step-size $\tau=1.618$ and the multi-block ADMM with Gaussian back substitution \cite{HTY12,HY13}. It requires the fewest iterations for $70\%$ of the test instances within computing time comparable to that of the directly extended ADMM, and for about $40\%$ of the tested problems its number of iterations is only $67\%$ of that of the multi-block ADMM with Gaussian back substitution \cite{HTY12,HY13}.
    Comment: 37 pages, 5 figures. arXiv admin note: text overlap with arXiv:1404.5378 by other authors
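    For intuition, a directly extended 3-block ADMM on a strongly convex toy problem (on which it does happen to converge) can be sketched as follows; the problem and parameters are illustrative, and on general 3-block problems this direct extension carries no convergence guarantee, which is exactly the gap the paper's correction step addresses.

```python
def direct_three_block_admm(a, c, sigma=1.0, tau=1.0, iters=300):
    # min  sum_i 0.5*(x_i - a_i)^2   s.t.  x_1 + x_2 + x_3 = c
    # One Gauss-Seidel sweep over the three blocks, then a dual update.
    a1, a2, a3 = a
    x1 = x2 = x3 = 0.0
    u = 0.0                                   # scaled multiplier
    for _ in range(iters):
        x1 = (a1 + sigma * (c - x2 - x3 - u)) / (1.0 + sigma)
        x2 = (a2 + sigma * (c - x1 - x3 - u)) / (1.0 + sigma)
        x3 = (a3 + sigma * (c - x1 - x2 - u)) / (1.0 + sigma)
        u = u + tau * (x1 + x2 + x3 - c)      # dual (multiplier) step
    return x1, x2, x3
```

    The KKT conditions of the toy problem give $x_i = a_i - \lambda$ with $\lambda = (\sum_i a_i - c)/3$, which the iteration approaches linearly here.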

    Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression

    This paper is concerned with high-dimensional error-in-variables regression, which aims at identifying a small number of important interpretable factors for corrupted data from the many applications where measurement errors or missing data cannot be ignored. Motivated by CoCoLasso due to Datta and Zou \cite{Datta16} and by the advantage of the zero-norm regularized LS estimator over Lasso for clean data, we propose a calibrated zero-norm regularized LS (CaZnRLS) estimator by constructing a calibrated least squares loss with a positive definite projection of an unbiased surrogate for the covariance matrix of the covariates, and use the multi-stage convex relaxation approach to compute the CaZnRLS estimator. Under a restricted eigenvalue condition on the true matrix of covariates, we derive the $\ell_2$-error bound of every iterate, establish the decrease of the error bound sequence, and prove the sign consistency of the iterates after finitely many steps. Statistical guarantees are also provided for the CaZnRLS estimator under two types of measurement errors. Numerical comparisons with CoCoLasso and NCL (the nonconvex Lasso proposed by Loh and Wainwright \cite{Loh11}) demonstrate that CaZnRLS not only has comparable or even better relative RMSE but also identifies the fewest incorrect predictors.
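    The calibration step — replacing a possibly indefinite unbiased covariance surrogate by a nearby positive semidefinite matrix — can be sketched as below. The additive-noise surrogate $W^\top W/n-\tau^2 I$ is one standard choice from the error-in-variables literature, and the code is an illustration rather than the authors' implementation.

```python
import numpy as np

def psd_projection(S):
    # Nearest (Frobenius-norm) positive semidefinite matrix to symmetric S:
    # clip its negative eigenvalues at zero.
    w, V = np.linalg.eigh((S + S.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def calibrated_gram(W, noise_var):
    # Unbiased surrogate of the clean Gram matrix under additive measurement
    # noise of known variance, projected onto the PSD cone so that the
    # resulting least squares loss is convex.
    n, p = W.shape
    S = W.T @ W / n - noise_var * np.eye(p)
    return psd_projection(S)
```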

    Antitrace maps and light transmission coefficients for generalized Fibonacci multilayers

    By using the antitrace map method, we investigate light transmission through generalized Fibonacci multilayers. Analytical results are obtained for the transmission coefficients in some special cases. We find that the transmission coefficients possess a two-cycle or six-cycle property. The cycle properties of the trace and antitrace maps are also obtained.
    Comment: 8 pages, no figures

    Computation of graphical derivatives of normal cone maps to a class of conic constraint sets

    This paper is concerned with the graphical derivative of the normal cone map to the conic constraint $g(x)\in K$, where $g:\mathbb{X}\to\mathbb{Y}$ is a twice continuously differentiable mapping and $K\subseteq\mathbb{Y}$ is a nonempty closed convex set assumed to be $C^2$-cone reducible. Such a generalized derivative plays a crucial role in characterizing the isolated calmness of the solution maps to generalized equations whose multivalued parts are modeled via the normals to the nonconvex set $\Gamma=g^{-1}(K)$. The main contribution of this paper is an exact characterization of the graphical derivative of the normals to this class of nonconvex conic constraints, under an assumption that, unlike the papers \cite{Gfrerer17,Mordu15,Mordu151}, does not require nondegeneracy of the reference point.
    Comment: 28 pages

    KL property of exponent $1/2$ for zero-norm composite quadratic functions

    This paper is concerned with a class of zero-norm regularized and constrained composite quadratic optimization problems, which have important applications in fields such as sparse eigenvalue problems, sparse portfolio problems, and nonnegative matrix factorization. For this class of nonconvex and nonsmooth problems, we establish the KL property of exponent $1/2$ for the objective function under a suitable assumption, and provide some examples to illustrate that the assumption holds.
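    For reference, the standard definition (textbook form, not quoted from the paper): a proper lower semicontinuous function $f$ has the KL property of exponent $1/2$ at a critical point $\bar{x}$ if there exist $c,\epsilon,\eta>0$ such that

```latex
\operatorname{dist}\bigl(0,\partial f(x)\bigr)\ \ge\ c\,\bigl(f(x)-f(\bar{x})\bigr)^{1/2}
\quad\text{whenever } \|x-\bar{x}\|\le\epsilon \text{ and } f(\bar{x})<f(x)<f(\bar{x})+\eta .
```

    This exponent is what typically yields linear convergence rates for first-order descent methods applied to such problems.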